1,666 research outputs found

    LORE: A Compound Object Authoring and Publishing Tool for Literary Scholars based on the FRBR

    Get PDF
    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-06-04, 10:30 AM – 12:00 PM.
    This paper presents LORE (Literature Object Re-use and Exchange), a light-weight tool designed to enable scholars and teachers of literature to author, edit and publish OAI-ORE-compliant compound information objects that encapsulate related digital resources and bibliographic records. LORE provides a graphical user interface for creating, labelling and visualizing typed relationships between individual objects using terms from a bibliographic ontology based on the IFLA FRBR. After creating a compound object, users can attach metadata and publish it to a Fedora repository (as an RDF graph) where it can be searched, retrieved, edited and re-used by others. LORE has been developed in the context of the Australian Literature Resource project (AustLit) and hence focuses on compound objects for teaching and research within the Australian literature studies community. NCRIS National eResearch Architecture Taskforce (NeAT).
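The compound-object model described above is, at bottom, an RDF graph built from OAI-ORE terms. The sketch below (plain Python with invented AustLit-style URIs; a real implementation such as LORE would use an RDF library and publish the graph to Fedora) shows the shape of such an aggregation:

```python
# Minimal sketch of an OAI-ORE compound object as a set of RDF triples.
# All example.org URIs are hypothetical, not real AustLit identifiers.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
ORE = "http://www.openarchives.org/ore/terms/"
DCTERMS = "http://purl.org/dc/terms/"

agg = "http://example.org/austlit/aggregation/1"
work = "http://example.org/austlit/work/123"
review = "http://example.org/austlit/review/42"

triples = {
    (agg, RDF + "type", ORE + "Aggregation"),
    (agg, ORE + "aggregates", work),
    (agg, ORE + "aggregates", review),
    # A typed relationship between two aggregated resources.
    (review, DCTERMS + "references", work),
}

def to_ntriples(ts):
    """Serialize (subject, predicate, object) URI triples as N-Triples."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in sorted(ts))

doc = to_ntriples(triples)
```

Because the object is just a graph, publishing, searching and re-use reduce to standard RDF operations over the repository.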

    Estimating Fire Weather Indices via Semantic Reasoning over Wireless Sensor Network Data Streams

    Full text link
    Wildfires are frequent, devastating events in Australia that regularly cause significant loss of life and widespread property damage. Fire weather indices are a widely-adopted method for measuring fire danger and they play a significant role in issuing bushfire warnings and in anticipating demand for bushfire management resources. Existing systems that calculate fire weather indices are limited due to low spatial and temporal resolution. Localized wireless sensor networks, on the other hand, gather continuous sensor data measuring variables such as air temperature, relative humidity, rainfall and wind speed at high resolutions. However, using wireless sensor networks to estimate fire weather indices is a challenge due to data quality issues, lack of standard data formats and lack of agreement on thresholds and methods for calculating fire weather indices. Within the scope of this paper, we propose a standardized approach to calculating Fire Weather Indices (a.k.a. fire danger ratings) and overcome a number of the challenges by applying Semantic Web Technologies to the processing of data streams from a wireless sensor network deployed in the Springbrook region of South East Queensland. This paper describes the underlying ontologies, the semantic reasoning and the Semantic Fire Weather Index (SFWI) system that we have developed to enable domain experts to specify and adapt rules for calculating Fire Weather Indices. We also describe the Web-based mapping interface that we have developed, which enables users to improve their understanding of how fire weather indices vary over time within a particular region. Finally, we discuss our evaluation results, which indicate that the proposed system outperforms state-of-the-art techniques in terms of accuracy, precision and query performance.
    Comment: 20 pages, 12 figures.
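The fire weather index computed from such sensor variables is commonly the McArthur Forest Fire Danger Index (FFDI). The sketch below uses the widely cited Noble et al. (1980) approximation of the McArthur Mk 5 meter; whether the SFWI system uses exactly this formulation and these rating bands is an assumption here, since the abstract notes that agencies disagree on thresholds and methods:

```python
import math

def ffdi(temp_c, rel_humidity, wind_kmh, drought_factor):
    """McArthur FFDI, Noble et al. (1980) approximation.

    temp_c         -- air temperature in degrees Celsius
    rel_humidity   -- relative humidity in percent
    wind_kmh       -- 10 m wind speed in km/h
    drought_factor -- fuel availability, 0-10
    """
    return 2.0 * math.exp(
        -0.450
        + 0.987 * math.log(drought_factor)
        - 0.0345 * rel_humidity
        + 0.0338 * temp_c
        + 0.0234 * wind_kmh
    )

def rating(index):
    """One common banding of FFDI into danger ratings; thresholds vary between agencies."""
    for threshold, label in [(75, "Extreme"), (50, "Severe"),
                             (25, "Very High"), (12, "High"), (5, "Moderate")]:
        if index >= threshold:
            return label
    return "Low"
```

For example, a hot, dry, windy day (35 °C, 25% humidity, 30 km/h wind, drought factor 10) yields an FFDI in the mid-30s, a "Very High" rating under this banding.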

    The Application of Metadata Standards to Multimedia in Museums

    Get PDF
    This paper first describes the application of a multi-level indexing approach, based on Dublin Core extensions and the Resource Description Framework (RDF), to a typical museum video. The advantages and disadvantages of this approach are discussed in the context of the requirements of the proposed MPEG-7 ("Multimedia Content Description Interface") standard. The work on SMIL (Synchronized Multimedia Integration Language) by the W3C SYMM working group is then described. Suggestions for how this work can be applied to video metadata are made. Finally a hybrid approach is proposed, based on the combined use of Dublin Core and the currently undefined MPEG-7 standard within RDF, which will provide a solution to the problem of satisfying widely differing user requirements.
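A multi-level indexing approach of the kind described — one Dublin Core record for the whole video plus time-coded sub-descriptions — can be pictured as nested records. The field names below follow Dublin Core, but the segment structure and example content are illustrative assumptions, not the paper's exact schema:

```python
# Illustrative multi-level description of a museum video: a whole-resource
# Dublin Core record plus time-coded segment descriptions (invented data).
video = {
    "dc:title": "Museum tour video",
    "dc:type": "MovingImage",
    "dc:format": "video/mpeg",
    "segments": [
        {"start": 0.0,  "end": 42.5, "dc:subject": ["entrance", "foyer"]},
        {"start": 42.5, "end": 95.0, "dc:subject": ["dinosaur gallery"]},
    ],
}

def segments_about(desc, keyword):
    """Return (start, end) pairs of segments whose subjects mention keyword."""
    return [(s["start"], s["end"])
            for s in desc["segments"]
            if any(keyword in subj for subj in s["dc:subject"])]
```

The payoff of the multi-level structure is exactly this kind of query: retrieval can land on the relevant segment rather than the whole video.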

    Rights Markup Extensions for the Protection of Indigenous Knowledge

    Get PDF
    Indigenous cultures have experienced a renaissance over the past 5-10 years as indigenous communities have recognized the importance of documenting and sharing their cultural heritage and history. This has coincided with the explosion of the internet and the widespread application of multimedia technologies to the construction of large online cultural collections. Together these developments have triggered a demand for copyright protection mechanisms. A number of XML-based markup languages (XrML, ODRL) have been developed to support the expression of rights associated with the intellectual property of resources. The MPEG-21 Multimedia Framework standard being developed by the Moving Picture Experts Group (MPEG) aims to standardize such a language to enable the management and protection of intellectual property associated with multimedia content. However, it has been widely recognized that modern intellectual property laws, which are rapidly assuming global uniformity, fail to protect indigenous knowledge adequately or to support traditional or customary laws governing rights over indigenous knowledge. This paper considers some of the requirements for the protection of indigenous knowledge and the enforcement of tribal customary laws associated with knowledge, which have been expressed by Australian Aboriginal and Torres Strait Islander communities. It assesses the ability of the two major XML-based rights markup languages (XrML and ODRL) to satisfy these requirements and suggests extensions to these languages to improve their support for indigenous knowledge protection. The aim of this paper is to provide a starting point which will encourage input, feedback and suggestions from indigenous communities. This will enable a clearer understanding of their diverse requirements with respect to the protection of intellectual property and traditional knowledge and the development of a satisfactory solution through future collaboration and consultation.
Given a standardized machine-understandable representation of rights information, the utopian dream of trusted systems - automated rights enforcement and secure transactions involving both indigenous and non-indigenous resources - moves one step closer. But more importantly, the recognition of customary law and the rights of indigenous cultures within such systems will lead to greater cross-cultural understanding, respect and tolerance, and the promotion of indigenous social, cultural and economic development.
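As an illustration of the kind of extension the paper proposes, a rights expression might attach a customary-law constraint alongside a standard permission. The element names below, in particular the `tribal:` extension, are invented for illustration and are not the paper's actual proposed XrML/ODRL extensions:

```python
import xml.etree.ElementTree as ET

# Sketch of an ODRL-style rights expression extended with a hypothetical
# customary-law constraint; the <tribal:...> element is illustrative only.
agreement = ET.Element("agreement")
ET.SubElement(agreement, "asset", {"id": "urn:example:recording:7"})
permission = ET.SubElement(agreement, "permission")
ET.SubElement(permission, "display")
constraint = ET.SubElement(permission, "constraint")
# Invented extension: restrict display to initiated community members.
ET.SubElement(constraint, "tribal:audience").text = "initiated-members"

xml_text = ET.tostring(agreement, encoding="unicode")
```

The point of such an extension is that a rights-enforcement system can evaluate customary-law constraints with the same machinery it uses for conventional copyright terms.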

    The Open Archives Initiative Object Reuse and Exchange (OAI-ORE)

    No full text
    The Open Archives Initiative Object Reuse and Exchange (OAI-ORE) is a new collaborative international initiative, funded for the next two years by the Mellon Foundation, focusing on an interoperability framework for the exchange of information about Digital Objects between cooperating repositories, registries and services. The initiative's web site is available at http://openarchives.org/ore. A primary aim of the OAI-ORE is to support the creation, management and dissemination of the new forms of composite digital resources being produced by eResearch and eScholarship (i.e., multi-part, multi-media, distributed and service-oriented rather than single file-based objects). As more scientific communities begin to publish their raw data sets, experimental details, analytical methods and visualisations, in addition to the traditional scholarly publications, the problem of describing such compound objects for exchange and re-use will become critical. It is the aim of OAI-ORE to extend the work of OAI-PMH to support the metadata requirements for the exchange and re-use of such complex resources. This paper will present an overview of the OAI-ORE, its aims and objectives, workplan and current and anticipated deliverables. It will also demonstrate the applicability of OAI-ORE to eResearch through a number of case studies that involve the generation, analysis, re-use and exchange of compound digital objects within scientific disciplines.

    MetaNet: a metadata term thesaurus to enable semantic interoperability between metadata domains

    Get PDF
    Metadata interoperability is a fundamental requirement for access to information within networked knowledge organization systems. The Harmony International Digital Library Project [1] has developed a common underlying data model (the ABC model) to enable the scalable mapping of metadata descriptions across domains and media types. The ABC model, described in [2], provides a set of basic building blocks for metadata modeling and recognizes the importance of 'events' to unambiguously describe metadata for objects with a complex history. In order to test and evaluate the interoperability capabilities of this model, we applied it to some real multimedia examples and analysed the results of mapping from the ABC model to various different metadata domains using XSLT [3]. This work revealed serious limitations in XSLT's ability to support flexible dynamic semantic mapping. In order to overcome this, we developed MetaNet [4], a metadata term thesaurus which provides the additional semantic knowledge that is absent from declarative XML-encoded metadata descriptions. This paper describes MetaNet, its RDF Schema [5] representation and a hybrid mapping approach which combines the structural and syntactic mapping capabilities of XSLT with the semantic knowledge of MetaNet, to enable flexible and dynamic mapping among metadata standards.
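The hybrid approach separates structural transformation (XSLT's role) from semantic lookup (the thesaurus's role). The sketch below shows the semantic-lookup half in plain Python; the grouping of terms under shared concepts mirrors the thesaurus idea, but the vocabulary entries themselves are invented, not taken from MetaNet:

```python
# Tiny stand-in for a term thesaurus: metadata terms from different
# schemas grouped under a shared concept.  Entries are invented examples.
THESAURUS = {
    "creator": {"dc:creator", "marc:100a", "abc:agent"},
    "date":    {"dc:date", "marc:260c", "abc:time"},
}

def translate(term, target_prefix):
    """Map a term to its synonym in the target schema, if the thesaurus knows one."""
    for synonyms in THESAURUS.values():
        if term in synonyms:
            for candidate in synonyms:
                if candidate.startswith(target_prefix + ":"):
                    return candidate
    return None
```

An XSLT stylesheet handles the element structure, while a lookup like this supplies the cross-schema term equivalences that XSLT alone cannot express dynamically.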

    Harvesting community tags and annotations to augment institutional repository metadata

    No full text
    One of the greatest challenges facing managers of institutional repositories today is the cost of providing high quality, precise metadata that satisfies the search requirements of their many different user groups. Social tagging systems such as Flickr, del.icio.us, Connotea and YouTube enable communities to tag photos, web pages, scientific publications and videos with organically-evolved, community-relevant vocabularies and to share their tags through the Web. But is there a way that repository managers can exploit these new community tagging movements to enhance their collections’ metadata? If users are provided with simple tagging services, can they be encouraged to generate meaningful, useful metadata that can then be harvested and exploited? This presentation will describe a number of semantic tagging and annotation services that we have developed for open repositories of social sciences and humanities data (indigenous collections, linguistic recordings, publications). It will also discuss possible solutions to the associated social and technical challenges that include: motivating users to attach annotations; ensuring quality control and authentication of the annotations; techniques for harvesting meaningful, useful metadata (using OAI-PMH); exploiting the secondary metadata to improve the search and browse capabilities over the repositories; differentiating between primary and secondary metadata in the presentation of search results.
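Harvesting tags over OAI-PMH, as described above, amounts to extracting dc:subject elements from harvested oai_dc records and distinguishing community-contributed values from curated ones. The fragment below parses a hand-written sample record; the record content and the `community-tag:` prefix convention are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A hand-written oai_dc fragment standing in for one harvested record.
SAMPLE = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Field recording, session 3</dc:title>
  <dc:subject>linguistics</dc:subject>
  <dc:subject>community-tag:ceremony</dc:subject>
</oai_dc:dc>"""

DC = "{http://purl.org/dc/elements/1.1/}"

def community_tags(record_xml):
    """Return (subject, is_community_contributed) pairs from an oai_dc record."""
    root = ET.fromstring(record_xml)
    return [(el.text, el.text.startswith("community-tag:"))
            for el in root.findall(DC + "subject")]
```

Keeping the community flag alongside each value is one simple way to differentiate primary from secondary metadata in search results, as the presentation discusses.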

    Scientific Publication Packages: A Selective Approach to the Communication and Archival of Scientific Output

    Get PDF
    The use of digital technologies within research has led to a proliferation of data, many new forms of research output and new modes of presentation and analysis. Many scientific communities are struggling with the challenge of how to manage the terabytes of data and new forms of output they are producing. They are also under increasing pressure from funding organizations to publish their raw data, in addition to their traditional publications, in open archives. In this paper I describe an approach that involves the selective encapsulation of raw data, derived products, algorithms, software and textual publications within "scientific publication packages". Such packages provide an ideal method for: encapsulating expert knowledge; for publishing and sharing scientific process and results; for teaching complex scientific concepts; and for the selective archival, curation and preservation of scientific data and output. They also provide a bridge between technological advances in the Digital Libraries and eScience domains. In particular, I describe the RDF-based architecture that we are adopting to enable scientists to construct, publish and manage "scientific publication packages" - compound digital objects that encapsulate and relate the raw data to its derived products, publications and the associated contextual, provenance and administrative metadata.
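A scientific publication package of the kind described is essentially a typed graph relating raw data, derived products and publications. The sketch below uses invented `spp:` property names to stand in for the paper's actual RDF vocabulary, which is not shown in the abstract, and shows why the typed links matter: provenance becomes a graph traversal:

```python
# Sketch: a publication package as triples relating its components.
# The identifiers and 'spp:' property names are invented for illustration.
package = {
    ("pkg:1", "spp:contains", "data:raw-run-07"),
    ("pkg:1", "spp:contains", "img:visualisation-07"),
    ("pkg:1", "spp:contains", "doc:paper-2005"),
    ("img:visualisation-07", "spp:derivedFrom", "data:raw-run-07"),
    ("doc:paper-2005", "spp:describes", "data:raw-run-07"),
}

def provenance(graph, node):
    """Walk spp:derivedFrom links back to a node's raw-data ancestors."""
    parents = [o for s, p, o in graph if s == node and p == "spp:derivedFrom"]
    out = []
    for parent in parents:
        out.append(parent)
        out.extend(provenance(graph, parent))
    return out
```

Curation and selective preservation then operate on whole packages, while individual components remain addressable through the graph.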

    Scientific Models: A User-oriented Approach to the Integration of Scientific Data and Digital Libraries

    Get PDF
    Many scientific communities are struggling with the challenge of how to manage the terabytes of data they are producing, often on a daily basis. Scientific models are the primary method for representing and encapsulating expert knowledge in many disciplines. Scientific models could also provide a mechanism: for publishing and sharing scientific results; for teaching complex scientific concepts; and for the selective archival, curation and preservation of scientific data. As such, they also provide a bridge for collaboration between Digital Libraries and eScience. In this paper I describe research being undertaken within the FUSION project at the University of Queensland to enable scientists to construct, publish and manage scientific model packages that encapsulate and relate the raw data to its associated contextual and provenance metadata, processing steps, derived information and publications. This work involves extending tools and services that have come out of the Digital Libraries domain to support eScience requirements.

    Inductive versus deductive methods in word analysis in grade five.

    Full text link
    Thesis (Ed.D.)--Boston University